An EM algorithm for learning sparse and overcomplete representations
Authors
Abstract
An expectation-maximization (EM) algorithm for learning sparse and overcomplete representations is presented in this paper. We show that the estimation of the conditional moments of the posterior distribution can be accomplished by maximum a posteriori (MAP) estimation. The approximate conditional moments enable the development of an EM algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients. © 2003 Elsevier B.V. All rights reserved.
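The abstract describes the algorithm only at a high level; below is a minimal Python sketch of one way the E/M structure could look. Everything here is an illustrative assumption (the function names, the Laplacian/L1 prior solved by ISTA in the E-step, the regularized least-squares M-step, and all hyperparameters); it is not the authors' implementation.

```python
import numpy as np

def map_coefficients(x, A, lam=0.1, n_iter=200):
    """MAP coefficients under a Laplacian (L1) prior: an L1-penalized
    least-squares problem, solved here by ISTA (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = s - A.T @ (A @ s - x) / L             # gradient step on the data term
        s = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft-threshold (prior)
    return s

def em_overcomplete(X, n_basis, n_em_iters=50, lam=0.1, seed=0):
    """Toy EM loop: E-step = MAP coefficients standing in for posterior moments,
    M-step = regularized least-squares update of the (overcomplete) basis."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], n_basis))
    A /= np.linalg.norm(A, axis=0)                # unit-norm basis vectors
    for _ in range(n_em_iters):
        S = np.stack([map_coefficients(x, A, lam) for x in X])              # E-step
        A = X.T @ S @ np.linalg.pinv(S.T @ S + 1e-6 * np.eye(n_basis))      # M-step
        A /= np.linalg.norm(A, axis=0) + 1e-12    # keep basis vectors unit norm
    return A, S
```

With `X` of shape `(n_samples, dim)`, a call such as `em_overcomplete(X, n_basis=2 * X.shape[1])` would fit a two-times overcomplete basis.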
Related papers
A Mixture Model for Learning Sparse Representations
In a latent variable model, an overcomplete representation is one in which the number of latent variables is at least as large as the dimension of the data observations. Overcomplete representations have been advocated due to robustness in the presence of noise, the ability to be sparse, and an inherent flexibility in modeling the structure of data [9]. In this report, we modify factor analysis...
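To make the definition of overcompleteness concrete, here is a tiny numerical illustration (dimensions chosen arbitrarily): with more latent variables than data dimensions, exact representations are non-unique, which is why an extra criterion such as sparsity is needed to pick one out.

```python
import numpy as np

# 4-dimensional data, 8 latent variables: the 4x8 basis A has more columns
# than rows, so solutions s of A @ s = x form an infinite family.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
x = rng.standard_normal(4)
s_min_norm = np.linalg.pinv(A) @ x    # one of infinitely many exact solutions
assert np.allclose(A @ s_min_norm, x)
```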
Training sparse natural image models with a fast Gibbs sampler of an extended state space
We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models. Particular emphasis is placed on statistical image modeling, where overcomplete models have played an important role in discovering sparse representations. Our Gibbs sampler is faster than general-purpose sampling schemes while also requiring no tuning, as it is free of parameter...
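As a rough illustration of Gibbs sampling in a sparse linear model, here is a single-site spike-and-slab sweep in Python. The prior, the update formulas, and all constants are assumptions for the sketch; the paper's contribution is a faster *blocked* sampler on an extended state space, which this does not reproduce.

```python
import numpy as np

def gibbs_spike_slab(x, A, sigma2=0.1, tau2=1.0, pi=0.1, n_sweeps=200, seed=0):
    """Single-site Gibbs sweeps for s in x ~ N(A s, sigma2 I) with a
    spike-and-slab prior: s_j = 0 w.p. 1-pi, else s_j ~ N(0, tau2)."""
    rng = np.random.default_rng(seed)
    d, m = A.shape
    s = np.zeros(m)
    col_ss = np.sum(A * A, axis=0)                # ||a_j||^2 for each column
    for _ in range(n_sweeps):
        for j in range(m):
            r = x - A @ s + A[:, j] * s[j]        # residual excluding unit j
            v = 1.0 / (col_ss[j] / sigma2 + 1.0 / tau2)   # conditional variance
            mu = v * (A[:, j] @ r) / sigma2               # conditional mean
            # Bayes factor for "active" (slab) vs "inactive" (spike)
            log_bf = 0.5 * np.log(v / tau2) + 0.5 * mu**2 / v
            p_active = pi / (pi + (1 - pi) * np.exp(-log_bf))
            s[j] = rng.normal(mu, np.sqrt(v)) if rng.random() < p_active else 0.0
    return s
```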
Image Classification via Sparse Representation and Subspace Alignment
Image representation is a crucial problem in image processing, where many low-level image representations exist, e.g., SIFT, HOG, and so on. But there is a missing link between low-level and high-level semantic representations. In fact, traditional machine learning approaches, e.g., non-negative matrix factorization, sparse representation, and principal component analysis, are employed to d...
Learning Data Representations with Sparse Coding Neural Gas
We consider the problem of learning an unknown (overcomplete) basis from an unknown sparse linear combination. Introducing the “sparse coding neural gas” algorithm, we show how to employ a combination of the original neural gas algorithm and Oja’s rule in order to learn a simple sparse code that represents each training sample by a multiple of one basis vector. We generalise this algorithm usin...
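The update described, neural-gas rank weighting applied to Oja's rule, might be sketched as follows. The schedules and constants here are guesses for illustration, not the published settings.

```python
import numpy as np

def sparse_coding_neural_gas(X, n_units=8, n_epochs=20, eps0=0.5, lam0=4.0, seed=0):
    """Sketch of a sparse-coding-neural-gas style update: rank units by
    response magnitude (neural gas) and adapt each with Oja's rule, so every
    sample ends up represented by (a multiple of) one basis vector."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    n_steps, t = n_epochs * len(X), 0
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            eps = eps0 * (0.01 / eps0) ** (t / n_steps)     # annealed learning rate
            lam = lam0 * (0.1 / lam0) ** (t / n_steps)      # annealed neighborhood
            ranks = np.argsort(np.argsort(-np.abs(W @ x)))  # 0 = best-matching unit
            for j in range(n_units):
                y = W[j] @ x
                W[j] += eps * np.exp(-ranks[j] / lam) * y * (x - y * W[j])  # Oja step
                W[j] /= np.linalg.norm(W[j])                # keep unit norm
            t += 1
    return W
```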
Learning Nonlinear Overcomplete Representations for Efficient Coding
We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear function of the data. This can be viewed as a ge...
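The claim that Laplacian-prior codes are a nonlinear function of the data is easiest to see in the complete, orthonormal case, where the MAP solution has a closed form, soft-thresholding (the overcomplete case does not; this snippet is only an illustration of the nonlinearity):

```python
import numpy as np

def soft_threshold(u, lam):
    """MAP coefficient under a Laplacian prior when the basis is orthonormal:
    an explicitly nonlinear function of its input."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# For orthonormal A, argmin_s 0.5*||x - A s||^2 + lam*||s||_1 equals
# soft_threshold(A.T @ x, lam). Doubling x does not double the output,
# which is the abstract's point that sparse codes are nonlinear in the data.
```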
Journal: Neurocomputing
Volume: 57, Issue: -
Pages: -
Publication date: 2004